Patent abstract:
The invention relates to a method of authenticating a face presented to a terminal, comprising the steps in which: the imager acquires an initial image of a face; the processing unit determines a pose of the face from the initial image, and determines a reference pose and a target position placed randomly or pseudo-randomly in a target space; the screen displays a displayed image (6) comprising at least one visual orientation marker (7) initially at a reference position, and a visual target (8) at the target position, the displayed image being updated by moving the visual orientation marker (7) according to the successive poses of the face; the processing unit authenticates the presented face upon coincidence of the position of the visual orientation marker (7) with the target position where the visual target (8) is located.
Publication number: FR3077658A1
Application number: FR1850967
Filing date: 2018-02-06
Publication date: 2019-08-09
Inventors: Julien Doublet; Jean Beaudet
Applicant: Idemia Identity and Security France SAS
IPC main class:
Patent description:

METHOD FOR AUTHENTICATING A FACE
BACKGROUND AND TECHNOLOGICAL BACKGROUND
The present invention belongs to the field of biometrics, and more specifically relates to a method for authenticating a face.
Certain methods of identification or identity verification require acquiring an image of the face of the person wishing to claim an identity. These may, for example, be biometric identification methods, based on the analysis of elements of the face in order to carry out an identification. It may also involve comparing the person's face with identifying photographs, in particular when identity documents such as a passport are presented. Finally, access control methods based on facial recognition have recently appeared, in particular for unlocking a smartphone, as in patent US9477829.
However, the implementation of these methods requires guarding against fraud consisting in presenting to the imager a reproduction of the face, such as a photograph. To this end, methods have been developed for authenticating the face, that is to say for detecting possible fraud. Most of these methods are based on the analysis of an imposed movement, generally called a challenge. Thus, for example, the person whose face is presented to the imager is asked to perform specific actions such as blinking, smiling or nodding. However, such methods have proven vulnerable to fraud based on the presentation of videos in which a face performs the requested challenge.
PRESENTATION OF THE INVENTION
The object of the invention is to remedy these drawbacks, at least in part and preferably all of them, and aims in particular to propose a method for authenticating a face presented to a terminal that makes it possible to detect fraud, and which is simple, robust, and effective against the presentation of videos.
To this end, a method is proposed for authenticating a face presented to a terminal comprising an imager, a screen and a processing unit, the method comprising the steps in which:
- the imager acquires at least one initial image of a face presented in its field of acquisition,
- the processing unit determines a pose of the face from the initial image, and from this pose determines a reference pose, a pose of the face being defined by at least two angles representative of the orientation of the face appearing in an image, and a target position placed randomly or pseudo-randomly in a target space at a distance from a reference position corresponding to the reference pose, said target position corresponding to a target pose,
- the screen displays a displayed image comprising at least one visual orientation marker initially at the reference position, and a visual target at the target position,
- the imager acquires a stream of images of the face presented in its field of acquisition, and for a plurality of successive images of said stream of images:
- the processing unit determines successive positions of the visual orientation marker as a function of the successive poses of the face with respect to the reference pose in the plurality of successive images,
- the image displayed by the screen is updated by moving the visual orientation marker according to its determined successive positions,
- the processing unit authenticates the face presented when there is a coincidence of the position of the visual orientation marker and the target position where the visual target is located, corresponding to a coincidence of the pose of the face and the target pose; otherwise the face presented in the field of acquisition is considered a fraud.
The method is advantageously supplemented by the following characteristics, taken alone or in any of their technically possible combinations:
- the target space extends on either side of the reference position;
- the target space corresponds to angular ranges of the pose angles defining a plurality of possible target positions;
- the angles representative of the orientation of the face appearing in the image include a yaw angle around a vertical axis and a pitch angle around a horizontal axis, the angular ranges extending between ±10° and ±20° from the angles of the reference pose;
- the processing unit defines the target space also as a function of user data and/or of elements contained in the initial image;
- a time is allotted at the expiration of which, in the absence of coincidence between the position of the visual orientation marker and the target position where the visual target is located, the face presented in the field of acquisition is considered a fraud;
- the coincidence between the position of the visual orientation marker and the target position where the visual target is located needs to be maintained for a predetermined period before the processing unit authenticates the face;
- when the processing unit determines a reference pose, the processing unit transmits a movement instruction if the pose of the face does not correspond to a front view of the face;
- the authentication of the face presented is also conditioned by the determination of the three-dimensional character of the face presented, by implementing a motion-based photogrammetric reconstruction technique from at least two images of the image stream corresponding to two different poses;
- the processing unit determines, for the plurality of successive images of the image stream, a gaze direction of the face by identifying the orientation of the eyes of said face in the acquired images, the authentication of the face presented being also conditional on the gaze being directed towards the visual orientation marker and/or the visual target.
The invention also relates to a computer program product comprising program code instructions recorded on a non-transitory medium usable in a computer for the execution of the steps of a method according to the invention when said program is executed on a computer using said non-transitory medium.
The invention finally relates to a terminal comprising an imager, a screen and a processing unit, said terminal being configured to implement a method according to the invention.
PRESENTATION OF THE FIGURES
The invention will be better understood from the following description, which relates to embodiments and variants according to the present invention, given by way of nonlimiting examples and explained with reference to the appended schematic drawings, in which:
FIG. 1 shows a block diagram of steps implemented in the authentication method,
FIG. 2 schematically shows a person presenting his face to a terminal during the implementation of the method,
FIG. 3 schematically shows the angles defining the pose of a face,
FIGS. 4a and 4b show schematic examples of images displayed during a successful authentication, the face presented being considered as an authentic face,
FIGS. 5a and 5b show schematic examples of images displayed during a failed authentication, the face presented being considered as a fraud.
DETAILED DESCRIPTION
With reference to FIG. 2, the authentication method is implemented by means of a terminal 1 to which a face 2 of a user is presented. The terminal 1 includes a processing unit, an imager 3 adapted to acquire images of objects presented in its field of acquisition 4, and a screen 5 capable of displaying images to the user. More precisely, the terminal 1 is configured so that the user can simultaneously present his face 2 in the field of acquisition 4 of the imager 3 and look at the screen 5. The terminal 1 can thus be, for example, a pocket terminal such as a smartphone, which typically has an adequate arrangement of imager 3 and screen 5. The terminal 1 can nevertheless be any type of computer terminal, and can in particular be a fixed terminal dedicated to identity checks, for example installed in an airport. The processing unit comprises at least one processor and a memory, and makes it possible to execute a computer program for the implementation of the method.
The user presents his face 2 in the field of acquisition 4 of the imager 3. The imager 3 acquires at least one initial image of the face 2 (step S01). From the image of the face, the processing unit determines a pose of the face, that is to say its orientation. As illustrated in FIG. 3, the orientation of the face can be described by three angles corresponding to rotations around axes defined by the common configuration of faces. Indeed, a face has a bottom (in the direction of the neck), a top (the forehead and hair), a front where the mouth and eyes are entirely visible, and two sides where the ears are located. The different elements making up a face are distributed according to obvious geometric criteria in a front view: the mouth is below the nose, the eyes are on the same horizontal line, the ears are also on the same horizontal line, the eyebrows are above the eyes, and so on.
It is therefore easy to define for the face, from its image, an orientation relative to the terminal 1 based on the rotation of the face about a vertical axis and at least one horizontal axis. FIG. 3 shows a typical example of describing the orientation of a face as a function of three angles around three orthogonal axes, following the terminology generally used for faces: a yaw angle around a vertical axis 20, a pitch angle around a first horizontal axis 21, and a roll angle around a second horizontal axis 22. The rotation around the vertical axis 20 corresponds to turning the head from left to right. The first horizontal axis 21 corresponds to a horizontal axis around which the head rotates during a nod. The second horizontal axis 22 corresponds to a horizontal axis included in the plane of symmetry of the face, passing through the nose and the mouth and separating the eyes. The rotation around this second horizontal axis 22 corresponds to leaning the head to the left or to the right.
Although the three angles can all be used to define the pose, the roll angle may be left out, because of the small amplitude of rotation of a face in this direction and the discomfort that this rotation can cause. The pose can therefore be defined by at least two angles representative of the orientation of the face appearing in the image, which are preferably the yaw angle and the pitch angle.
There are many known methods for estimating the pose of a face from one or more images of this face. One can for example use a pose estimation method based on deep learning, implemented by means of a convolutional neural network. The article "Head Pose Estimation in the Wild using Convolutional Neural Networks and Adaptive Gradient Methods" by M. Patacchiola and A. Cangelosi, Pattern Recognition, vol. 71, November 2017, pages 132-143, presents an example of a method that can be implemented to determine the pose of the face.
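By way of illustration only, pose estimation can also be done without deep learning, using the classical landmark-plus-perspective-n-point approach. The following minimal Python sketch with OpenCV is not the method of the cited article: the generic 3D face model, the pinhole camera approximation and the assumption of an external facial landmark detector supplying six 2D points are all assumptions of the sketch.

```python
import numpy as np
import cv2

# Generic 3D positions (arbitrary units) of six facial landmarks,
# a common approximation for landmark-based pose estimation.
MODEL_POINTS = np.array([
    (0.0, 0.0, 0.0),           # nose tip
    (0.0, -330.0, -65.0),      # chin
    (-225.0, 170.0, -135.0),   # left eye, outer corner
    (225.0, 170.0, -135.0),    # right eye, outer corner
    (-150.0, -150.0, -125.0),  # left mouth corner
    (150.0, -150.0, -125.0),   # right mouth corner
], dtype=np.float64)

def estimate_pose(image_points, frame_width, frame_height):
    """Estimate (yaw, pitch, roll) in degrees from six 2D landmarks.

    image_points: (6, 2) float64 array of pixel coordinates matching
    MODEL_POINTS, e.g. produced by any facial landmark detector.
    """
    # Rough pinhole camera model: focal length approximated by image width.
    camera_matrix = np.array([
        [frame_width, 0.0, frame_width / 2.0],
        [0.0, frame_width, frame_height / 2.0],
        [0.0, 0.0, 1.0],
    ])
    ok, rvec, _tvec = cv2.solvePnP(
        MODEL_POINTS, image_points, camera_matrix,
        np.zeros(4), flags=cv2.SOLVEPNP_ITERATIVE)
    if not ok:
        raise RuntimeError("pose estimation failed")
    rotation, _ = cv2.Rodrigues(rvec)
    # RQDecomp3x3 returns the Euler angles in degrees as its first output.
    pitch, yaw, roll = cv2.RQDecomp3x3(rotation)[0]
    return yaw, pitch, roll
```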
Once the pose of the face is determined from the image of the face, the processing unit determines a reference pose (step S02). This reference pose will then serve as a reference for the subsequent face poses. It can correspond directly to the pose of the face. Preferably, however, the reference pose must meet certain characteristics, and in particular have the face sufficiently frontal to allow proper application of the authentication method, and possibly of an identification method. Thus, when the processing unit determines a reference pose, the processing unit transmits a movement instruction if the pose of the face does not correspond to a front view of the face. A front view is considered here to imply yaw and pitch angles of less than 5° relative to an exact alignment of the second horizontal axis 22 (i.e. the axis of the nose) with the direction of image capture. The movement instruction is communicated to the user by any means, such as the display on the screen 5 of an instruction text asking the user to face the imager 3, or an audio message.
This reference pose corresponds to a reference position, which will subsequently be used as a position reference. For reasons of simplicity, the reference position can be defined as a central position. In fact, the reference position is preferably predetermined and immutable. It is however possible to define the reference position as a function of the content of the acquired initial image, for example to make it correspond to an element of the face. The processing unit also determines a target position.
The screen 5 displays a displayed image comprising at least one visual orientation marker initially at the reference position, and a visual target at the target position (step S03). FIG. 4a shows an example of such a displayed image 6. As illustrated, the displayed image 6 preferably includes the representation of the face previously acquired by the imager 3. A visual orientation marker 7 appears therein, here represented by a black circle. Other shapes, patterns or colors can be considered. For example, if it is desired to exploit the roll angle, it is then preferable to use a visual orientation marker 7 that is asymmetric under rotation, for example one bearing a pattern such as a cross or having a square shape, so that a change in the roll of the face 2 can be translated into a visually perceptible rotation of the visual orientation marker 7. In the illustrated case, the reference position where the visual orientation marker 7 is initially located is at the center of the image.
There is also the visual target 8, here represented by three concentric circles, placed at the target position. The target position, and therefore this visual target 8, is placed randomly or pseudo-randomly in a target space at a distance from the reference position. FIG. 4a shows an example of a target space 9, represented by two concentric dotted ovals centered on the reference position. Many other target spaces are possible. It should be noted that the target space does not appear in the displayed image 6 shown by the screen 5; this representation is given only for the purpose of illustration. The target space 9 represents the set of possible target positions.
This target space 9 corresponds to angular ranges of the pose angles defining a plurality of possible target positions. There is indeed an equivalence between the pose angles and the positions in the displayed image 6. This equivalence can be interpreted as a change of coordinate system. Thus, denoting X and Y the coordinates of a position in the displayed image, X and Y can be expressed as a function of the yaw and pitch angles, by taking the reference position at the center of the image and the reference pose as having zero yaw and pitch angles:

X = (k1 × yaw angle / 180°) × image width + image width / 2

Y = (k2 × pitch angle / 180°) × image height + image height / 2

with k1 and k2 amplification factors, which can be equal, and angles in degrees. This is a nonlimiting example; other formulas can be used, for example with angles in radians, different maximum angles (here ±90°), or a non-centered reference position. It is even possible to use non-linear formulas.
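As a sketch, this change of coordinate system and its inverse (which makes explicit the bijection discussed below) can be transcribed directly, assuming the centered reference position, the degree-valued angles and the k1, k2 factors of the nonlimiting example above:

```python
def pose_to_position(yaw, pitch, width, height, k1=1.0, k2=1.0):
    """Map a face pose (degrees) to marker coordinates in the displayed
    image: the reference pose (0, 0) lands at the image center and
    +/-90 degrees reach the image borders."""
    x = (k1 * yaw / 180.0) * width + width / 2.0
    y = (k2 * pitch / 180.0) * height + height / 2.0
    return x, y

def position_to_pose(x, y, width, height, k1=1.0, k2=1.0):
    """Inverse mapping, from a position in the displayed image back to a pose."""
    yaw = (x - width / 2.0) * 180.0 / (k1 * width)
    pitch = (y - height / 2.0) * 180.0 / (k2 * height)
    return yaw, pitch
```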
There is a bijection between a position on the displayed image and a pose of the face. Thus, the target position corresponds to a target pose. It is moreover possible for the processing unit to determine a target pose in a target space 9 (this time an angular one), and to deduce a target position from it. The user will have to modify the pose of his face 2 so that it corresponds to the target pose. Because of the link between a position in the image and a pose of the face, it is preferable to restrict the possible target positions to target poses which can comfortably be adopted by a user, that is to say to restrict the angular ranges to which the target space 9 corresponds. In addition, it is preferable to restrict the possible target positions to target poses requiring a sufficiently significant change of pose, that is to say a target position at a distance from the reference position. Consequently, the angular ranges defining the target space 9 preferably extend between ±10° and ±20° relative to the angles of the reference pose, at least for the two angles used (yaw angle and pitch angle). Preferably, the target space 9 extends on either side of the reference position, for example to the right and to the left, and not on one side only.
It is possible for the processing unit to define the target space 9 also as a function of user data and/or of elements contained in the image, for example by restricting the target space 9. It is for example possible to modify the location or the extent of the target space 9 as a function of the arrangement of the face in the initial image. It is also possible to adapt the target space 9 to take into account the physical characteristics of the user, such as his height, his age or a possible handicap. Since the target space 9 is preferably defined relative to the reference position, it is possible to modify the target space 9 by moving the reference position, which can itself be placed according to user data and/or elements contained in the image.
The target space 9 is at least piecewise continuous, that is to say it covers one or more areas of the displayed image 6. There is therefore a large number of possible target positions, at least more than 100, or even more than 1000. In fact, the continuity of the target space 9 yields an almost infinite number of possible target positions. As the target position is placed randomly or pseudo-randomly in the target space 9, the target position changes each time the method is implemented. It is therefore not possible for a fraudster to predict in advance where the target position will be, or to enumerate all possible target positions. As a result, it is ineffective to present to the imager 3 a video showing a face taking target poses based on previous implementations of the method, since the target position varies between each iteration of the method.
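A minimal sketch of the random draw, assuming an elliptical-annulus target space like the one of FIG. 4a with the ±10° to ±20° ranges mentioned above; the uniform distributions are an illustrative choice, not imposed by the method:

```python
import math
import random

def draw_target_pose(min_angle=10.0, max_angle=20.0):
    """Draw a target pose offset (yaw, pitch) in degrees, at a random
    direction around the reference pose and at an angular amplitude
    between min_angle and max_angle, so the target always requires a
    significant but comfortable change of pose."""
    direction = random.uniform(0.0, 2.0 * math.pi)
    amplitude = random.uniform(min_angle, max_angle)
    return amplitude * math.cos(direction), amplitude * math.sin(direction)
```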
After the displayed image 6 has appeared on the screen 5, the user must make the position of the visual orientation marker 7 coincide with the target position, by moving the visual orientation marker 7 onto the visual target 8. To do this, the user modifies the pose of his face, a modification reflected by the displacement of the visual orientation marker. The imager 3 then acquires a stream of images of the face 2 presented in its field of acquisition 4, and the processing unit analyzes a plurality of successive images of said stream of images in order to update the displayed image 6 to reflect the displacement of the visual orientation marker 7 following the changes in the pose of the face 2.
Thus, for each of the successive images, the processing unit determines the pose of the face appearing in the image (step S04). The face pose is determined in relation to the reference pose. From the pose of the face, the processing unit determines an updated position of the visual orientation marker 7 in the displayed image 6 (step S05). The image 6 displayed by the screen 5 is updated by moving the visual orientation marker according to its updated position (step S06). As illustrated, it is preferable to also display each successive acquired image, so that the user can see his face on the screen 5. The image stream corresponds in fact to a video, and the repetition of the procedure follows the frame rate of the video.
This procedure is repeated as long as neither the authentication conditions nor the fraud detection conditions are met. At each image, or at certain time intervals, the processing unit verifies whether the authentication or fraud detection conditions are met. In particular, the processing unit checks whether there is a coincidence of the position of the visual orientation marker and the target position (step S07), which corresponds to a coincidence of the pose of the face and the target pose. It is understood that the coincidence must be assessed with a tolerance interval around the target position; it would not be reasonable to require a match to the nearest pixel. Preferably, the visual target 8 has a certain surface, and it is considered that there is a coincidence when the visual orientation marker 7, or at least a part of it, covers at least a part of the surface of the visual target 8.
Preferably, it is required that the entire visual orientation marker 7 covers at least part of the surface of the visual target 8. This is the case illustrated in FIG. 4b.
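For circular markers and targets, this strict variant reduces to a simple geometric test; a sketch, assuming the marker and target are discs given by a center and a radius:

```python
import math

def is_coincident(marker_xy, target_xy, marker_radius, target_radius):
    """True when the whole marker disc lies over the target disc, i.e.
    the strict coincidence criterion, with a built-in tolerance equal to
    the target radius minus the marker radius."""
    return math.dist(marker_xy, target_xy) + marker_radius <= target_radius
```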
When there is a coincidence of the position of the visual orientation marker and the target position, the processing unit authenticates the face presented 2 as corresponding to an authentic face, that is to say not being a fraud. However, while this coincidence is a necessary condition, it is preferably not a sufficient one. It is preferable to add stability and time criteria conditioning the authentication of the face. As an example of a stability criterion, it can be provided that the coincidence of the position of the visual orientation marker and the target position needs to be maintained for a predetermined period before the processing unit authenticates the face presented 2. Such a predetermined duration may for example be greater than 0.5 seconds, and preferably greater than 1 second, or even 1.5 or 2 seconds. Thus, a fraud based on the presentation of a video where a face would quickly run through different poses in the hope that one of them corresponds to the target pose would not be effective, since holding each pose for the predetermined duration would take too long. In addition, this protects against an accidental coincidence which would transiently occur during a movement of the face presented when the target position is on the path of the visual orientation marker 7.
The authentication of the face can also be conditioned by the determination of the three-dimensional character of the face presented 2, by implementing a motion-based photogrammetric reconstruction technique from at least two images of the image stream corresponding to two different poses. One can in particular implement a simultaneous localization and mapping (SLAM) technique. Other conditions may result from the implementation of different fraud detection methods, in particular for detecting fraudulent artefacts, such as the detection of moiré, representative of the artificial appearance of the face presented 2.
The gaze direction can also make it possible to improve the authentication of the face. Typically, if the gaze is not directed towards the visual orientation marker 7 or the visual target 8, the face can be considered fraudulent. Indeed, in order to move the visual orientation marker 7, the user must on the one hand locate the visual target 8 at the target position, and therefore look at it, at least at the start, and then control the movement of the visual orientation marker 7 by looking at it. Thus, if the gaze of the user's face were directed in a direction other than one or the other of these positions (which normally tend to come together), such a singularity would constitute a strong indication that the face is not authentic.
Consequently, the processing unit preferably determines, for the plurality of successive images of the image stream, a gaze direction of the face by identifying the orientation of the eyes of said face, and the authentication of the face presented 2 is then also conditional on the gaze being directed towards the visual orientation marker and/or the visual target. If the gaze is directed too far from the visual orientation marker and/or the visual target, the face presented 2 is not authenticated. It is even possible to take into account the correlation between the displacement of the gaze direction of the face and the successive positions of the visual orientation marker, to estimate whether the tracking of the displacement of the visual orientation marker by the gaze corresponds to an authentic face.
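As a toy illustration of such a correlation criterion; the threshold value and the use of a simple Pearson correlation over the horizontal coordinate are assumptions of the sketch, not part of the method:

```python
import statistics

def gaze_follows_marker(gaze_x, marker_x, threshold=0.7):
    """Toy check (Python 3.10+): the horizontal gaze trajectory should be
    positively correlated with the marker trajectory for an authentic face."""
    if len(gaze_x) < 2 or len(gaze_x) != len(marker_x):
        return False
    try:
        return statistics.correlation(gaze_x, marker_x) >= threshold
    except statistics.StatisticsError:  # constant trajectory: no usable signal
        return False
```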
In addition, a time is allotted, at the expiration of which, in the absence of coincidence between the position of the visual orientation marker 7 and the target position, the face 2 presented in the field of acquisition 4 is considered to be a fraud. For example, this allotted time may be less than 10 seconds, or even less than 5 seconds, counted between the appearance of the visual target 8 and the end of the method. This time limit bounds the duration of the method, and in particular makes it possible to prevent fraud based on the presentation of a video where a face performs different poses in the hope that one of them corresponds to the target pose, because of the time necessary to do this, all the more so if the target pose must be held for a predetermined period.
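Putting the hold duration and the allotted time together, a minimal sketch of the challenge loop follows; pose_to_position and is_coincident are the sketches given earlier, next_pose stands for a hypothetical callable returning the pose estimated on the latest frame, and the durations are example values only:

```python
import time

HOLD_SECONDS = 1.0     # example predetermined hold duration
TIMEOUT_SECONDS = 5.0  # example allotted time for the whole challenge

def run_challenge(next_pose, target_xy, marker_radius, target_radius,
                  width, height):
    """Authenticate only if the marker stays coincident with the target
    for HOLD_SECONDS before TIMEOUT_SECONDS expires; otherwise fraud."""
    start = time.monotonic()
    hold_since = None
    while time.monotonic() - start < TIMEOUT_SECONDS:
        yaw, pitch = next_pose()  # pose of the face in the current frame
        marker_xy = pose_to_position(yaw, pitch, width, height)
        if is_coincident(marker_xy, target_xy, marker_radius, target_radius):
            hold_since = hold_since or time.monotonic()
            if time.monotonic() - hold_since >= HOLD_SECONDS:
                return True  # stable coincidence: authentic face
        else:
            hold_since = None  # any break in coincidence resets the hold
    return False  # allotted time expired: considered a fraud
```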
Thus, if the authentication conditions, including a coincidence of the position of the visual orientation marker and the target position, are met, the processing unit authenticates the face presented 2 (step S08). The authentication of the face presented 2 can then be used to continue an identification carried out in parallel.
Conversely, if at the expiration of the allotted time the authentication conditions have not been met, the face is considered a fraud. Thus, in the example of FIGS. 5a and 5b, the visual target 8 is placed at the top right of the displayed image 6. The reference position, where the visual orientation marker 7 is initially positioned, is central, as in FIG. 4a. In response to the display of the image 6 by the screen 5, the face presented 2 performs a change of pose which brings the visual orientation marker 7 to the left of the displayed image 6. This is the location which was that of the visual target 8 in the example of FIGS. 4a and 4b. It is therefore probably a fraud based on the presentation of a video replaying a previous response to the challenge proposed by the method. However, as the positioning of the visual target 8 changes with each implementation of the method, and there is an almost infinite number of possible target positions, knowledge of a previous implementation of the method does not allow the fraudster to authenticate the presented face 2. In this case, as the visual orientation marker 7 clearly does not coincide with the visual target 8 when the allotted time has elapsed, the challenge is considered to have failed and the face 2 presented in the field of acquisition 4 is considered a fraud.
In such a case, a fraud alert can be raised, for example by blocking an identification carried out in parallel and/or by alerting security personnel. It is also possible to inform the user of the failure to authenticate the face presented. If this is a first failure, the method can be implemented again in order to give the user a second chance.
The invention is not limited to the embodiment described and shown in the attached figures. Modifications remain possible, in particular from the point of view of the constitution of the various technical characteristics or by substitution of technical equivalents, without thereby departing from the scope of protection of the invention.
Claims
1. Method for authenticating a face presented (2) to a terminal (1) comprising an imager (3), a screen (5) and a processing unit, comprising the steps in which:
- the imager (3) acquires (S01) at least one initial image of a face presented in its field of acquisition (4),
the processing unit determines a pose of the face from the initial image, and from this pose of the face determines a reference pose (S02), a pose of the face being defined by at least two angles representative of the orientation of the face appearing in an image, and a target position placed randomly or pseudo-randomly in a target space (9) at a distance from a reference position corresponding to the reference pose, said target position corresponding to a target pose,
the screen (5) displays (S03) a displayed image (6) comprising at least one visual orientation marker (7) initially at the reference position, and a visual target (8) at the target position,
- the imager (3) acquires a stream of images of the face presented (2) in its field of acquisition (4), and for a plurality of successive images of said stream of images:
the processing unit determines (S05) successive positions of the visual orientation marker (7) as a function of the successive poses of the face with respect to the reference pose in the plurality of successive images,
the image displayed (6) by the screen (5) is updated (S06) by moving the visual orientation marker (7) according to its determined successive positions,
the processing unit authenticates the face presented (2) when there is a coincidence of the position of the visual orientation marker (7) and of the target position where the visual target (8) is located, corresponding to a coincidence of the face pose and the target pose, otherwise the face presented (2) in the field of acquisition (4) is considered as fraud.
2. Method according to the preceding claim, wherein the target space (9) extends on either side of the reference position.
3. Method according to one of the preceding claims, in which the target space (9) corresponds to angular ranges of the pose angles defining a plurality of possible target positions.
4. Method according to the preceding claim, wherein the angles representative of the orientation of the face appearing in the image comprise a yaw angle around a vertical axis and a pitch angle around a horizontal axis, the angular ranges extending between ±10° and ±20° relative to the angles of the reference pose.
5. Method according to one of the preceding claims, in which the processing unit defines the target space (9) also as a function of user data and/or of elements contained in the initial image.
6. Method according to any one of the preceding claims, in which a time is allotted at the expiration of which, in the absence of coincidence between the position of the visual orientation marker (7) and the target position where the visual target (8) is located, the face presented in the field of acquisition is considered as fraud.
7. Method according to any one of the preceding claims, in which the coincidence between the position of the visual orientation marker (7) and the target position where the visual target (8) is located needs to be maintained for a predetermined period of time before the processing unit authenticates the face.
8. Method according to any one of the preceding claims, in which, when the processing unit determines a reference pose, the processing unit transmits a movement instruction if the pose of the face does not correspond to a front view of the face.
9. Method according to any one of the preceding claims, in which the authentication of the face presented (2) is also conditioned by the determination of the three-dimensional character of the face presented (2), by implementing a motion-based photogrammetric reconstruction technique from at least two images of the image stream corresponding to two different poses.
10. Method according to any one of the preceding claims, in which the processing unit determines, for the plurality of successive images of the image stream, a gaze direction of the face by identifying the orientation of the eyes of said face in the acquired images, the authentication of the face presented (2) being also conditional on the gaze being directed towards the visual orientation marker and/or the visual target.
11. Computer program product comprising program code instructions recorded on a non-transitory medium usable in a computer for the execution of the steps of a method according to one of the preceding claims when said program is executed on a computer using said non-transitory medium.
12. Terminal (1) comprising an imager (3), a screen (5) and a processing unit, said terminal being configured to implement a method according to one of claims 1 to 10.
FIG. 1
Similar technologies:
Publication number | Publication date | Patent title
KR101356358B1|2014-01-27|Computer-implemented method and apparatus for biometric authentication based on images of an eye
JP2020064664A|2020-04-23|System for and method of authorizing access to environment under access control
US10601821B2|2020-03-24|Identity authentication method and apparatus, terminal and server
US11256792B2|2022-02-22|Method and apparatus for creation and use of digital identification
US20180034852A1|2018-02-01|Anti-spoofing system and methods useful in conjunction therewith
Tang et al.2018|Face flashing: a secure liveness detection protocol based on light reflections
EP3522053B1|2021-03-31|Method for authenticating a face
US8639058B2|2014-01-28|Method of generating a normalized digital image of an iris of an eye
US20150110366A1|2015-04-23|Methods and systems for determining user liveness
US8682073B2|2014-03-25|Method of pupil segmentation
EP2502211B1|2019-01-02|Method and system for automatically checking the authenticity of an identity document
EP2751739B1|2015-03-04|Detection of fraud for access control system of biometric type
EP2140401B1|2011-05-25|Method of comparing images, notably for iris recognition, implementing at least one quality measurement determined by applying a statistical learning model
US10380418B2|2019-08-13|Iris recognition based on three-dimensional signatures
Benlamoudi et al.2015|Face spoofing detection from single images using active shape models with stasm and lbp
EP3163484B1|2019-11-27|Method for recorded image projection fraud detection in biometric authentication
FR3100074A1|2021-02-26|Method for analyzing a facial feature of a face
FR3053500A1|2018-01-05|METHOD FOR DETECTING FRAUD OF AN IRIS RECOGNITION SYSTEM
FR2875322A1|2006-03-17|Person's face locating method for e.g. identifying person, involves detecting eyes in digital image produced by performing point to point subtraction between reference image taken before or after eye blink and image taken during blink
Wang et al.2013|An anti-fake iris authentication mechanism for smart glasses
FR3081072A1|2019-11-15|METHOD FOR BIOMETRIC RECOGNITION FROM IRIS
FR3088457A1|2020-05-15|METHOD FOR AUTOMATIC DETECTION OF FACE USURPATION
WO2015056210A2|2015-04-23|Method of authenticating a person
FR3077657A1|2019-08-09|DEVICE AND METHOD FOR DETECTING IDENTIFICATION USURPATION ATTEMPTS
CN110633559A|2019-12-31|Financial security evidence storage platform system and method based on block chain
Patent family:
Publication number | Publication date
KR20190095141A|2019-08-14|
CN110119666A|2019-08-13|
PL3522053T3|2021-10-25|
FR3077658B1|2020-07-17|
US20190244390A1|2019-08-08|
EP3522053A1|2019-08-07|
EP3522053B1|2021-03-31|
US10872437B2|2020-12-22|
Cited documents:
Publication number | Filing date | Publication date | Applicant | Patent title
US20170124385A1|2007-12-31|2017-05-04|Applied Recognition Inc.|Face authentication to mitigate spoofing|
US8457367B1|2012-06-26|2013-06-04|Google Inc.|Facial recognition|
WO2017100929A1|2015-12-15|2017-06-22|Applied Recognition Inc.|Systems and methods for authentication using digital signature with biometrics|
US20100100406A1|2008-10-21|2010-04-22|Beng Lim|Method for protecting personal identity information|
BRPI1101789E2|2011-02-14|2015-12-22|Neti Soluções Tecnologicas Ltda|face access validation system for biometric face recognition|
US8994499B2|2011-03-16|2015-03-31|Apple Inc.|Locking and unlocking a mobile device using facial recognition|
US9122966B2|2012-09-07|2015-09-01|Lawrence F. Glaser|Communication device|
WO2014039932A2|2012-09-07|2014-03-13|Glaser Lawrence F|Credit card form factor secure mobile computer and methods|
CA2883010A1|2014-02-25|2015-08-25|Sal Khan|Systems and methods relating to the authenticity and verification of photographic identity documents|
CN107657653A|2016-07-25|2018-02-02|同方威视技术股份有限公司|For the methods, devices and systems rebuild to the image of three-dimensional surface|
US20180124047A1|2016-10-31|2018-05-03|David L Fisher|High Assurance Remote Identity Proofing|
US11087514B2|2019-06-11|2021-08-10|Adobe Inc.|Image object pose synchronization|
CN113033530B|2021-05-31|2022-02-22|成都新希望金融信息有限公司|Certificate copying detection method and device, electronic equipment and readable storage medium|
Legal status:
2019-01-23| PLFP| Fee payment|Year of fee payment: 2 |
2019-08-09| PLSC| Publication of the preliminary search report|Effective date: 20190809 |
2020-01-10| CA| Change of address|Effective date: 20191205 |
2020-01-22| PLFP| Fee payment|Year of fee payment: 3 |
2021-01-20| PLFP| Fee payment|Year of fee payment: 4 |
2022-01-19| PLFP| Fee payment|Year of fee payment: 5 |
Priority:
Application number | Filing date | Patent title
FR1850967|2018-02-06|
FR1850967A|FR3077658B1|2018-02-06|2018-02-06|METHOD FOR AUTHENTICATING A FACE|FR1850967A| FR3077658B1|2018-02-06|2018-02-06|METHOD FOR AUTHENTICATING A FACE|
KR1020190012091A| KR20190095141A|2018-02-06|2019-01-30|Face authentication method|
CN201910102508.5A| CN110119666A|2018-02-06|2019-01-31|Face verification method|
US16/267,110| US10872437B2|2018-02-06|2019-02-04|Face authentication method|
PL19155210T| PL3522053T3|2018-02-06|2019-02-04|Method for authenticating a face|
EP19155210.8A| EP3522053B1|2018-02-06|2019-02-04|Method for authenticating a face|